    On the conditioning of multipoint and integral boundary value problems

    Linear multipoint boundary value problems are investigated from the point of view of the condition number and the properties of the fundamental solution. It is found that when the condition number is not large, the solution space is polychotomic. Conversely, if the solution space is polychotomic, then there exist boundary conditions for which the associated boundary value problem is well conditioned.
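    A minimal LaTeX sketch of the standard setting (the normalization and the particular condition-number expression below are common conventions, not taken from the paper):

        % Linear multipoint BVP: a first-order linear ODE with
        % m-point boundary conditions.
        \[
          y'(t) = A(t)\,y(t) + f(t), \qquad t \in [a,b],
        \]
        \[
          \sum_{i=1}^{m} B_i\, y(\tau_i) = \beta,
          \qquad a \le \tau_1 < \cdots < \tau_m \le b .
        \]
        % With Y(t) the fundamental solution normalized so that
        % \sum_i B_i Y(\tau_i) = I, the solution is
        % y(t) = Y(t)\beta + \int_a^b G(t,s) f(s)\,ds, and one
        % common condition number is
        \[
          \kappa \;=\; \max\Bigl( \sup_{t} \|Y(t)\|,\;
            \sup_{t,s} \|G(t,s)\| \Bigr),
        \]
        % where G is the Green's function; the problem is well
        % conditioned when \kappa is of moderate size.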

    A note on polychotomy

    A note on subset selection for matrices

    In an earlier paper the authors established a result on selecting subsets of a matrix that are as "non-singular" as possible in a numerical sense. The main result was not constructive. In this note we give a constructive proof and, moreover, a sharper bound.
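    A hedged Python sketch of the task: one standard constructive device for subset selection is QR with column pivoting, shown below. It illustrates what selecting a subset of columns that is "as non-singular as possible" means; it is not the authors' specific construction or bound.

        # Select k columns of A via QR with column pivoting; the
        # pivot order greedily keeps the chosen columns well
        # conditioned.
        import numpy as np
        from scipy.linalg import qr

        def select_columns(A, k):
            _, _, piv = qr(A, pivoting=True)
            return np.sort(piv[:k])

        rng = np.random.default_rng(0)
        A = rng.standard_normal((8, 6))
        print(select_columns(A, 3))   # indices of the 3 chosen columns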

    Asymptotic approximations for vibrational modes of helices

    The free vibrations in the plane normal to the helical axis are studied under the assumption that the helical pitch is small. Asymptotic approximations for eigenvalues and eigenfunctions are derived for both small and large numbers of helical turns. The analytic approximations reveal interesting features of helix vibrations and the connection between the vibrational modes of a helix and the flexural modes of a curved beam. Comparison with numerical calculations shows that the derived approximations cover a wide range of numbers of helical turns with sufficient accuracy.

    A weakly stable algorithm for general Toeplitz systems

    We show that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix A is weakly stable in the sense that R^T R is close to A^T A. Thus, when the algorithm is used to solve the semi-normal equations R^T R x = A^T b, we obtain a weakly stable method for the solution of a nonsingular Toeplitz or Hankel linear system Ax = b. The algorithm also applies to the solution of the full-rank Toeplitz or Hankel least squares problem. (17 pages; an old technical report with PostScript added. For further details, see http://wwwmaths.anu.edu.au/~brent/pub/pub143.htm)
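    A hedged Python sketch of the semi-normal equations step described above; a dense QR factorization stands in for the paper's fast Toeplitz algorithm, since only the R factor is needed:

        import numpy as np
        from scipy.linalg import toeplitz, solve_triangular

        rng = np.random.default_rng(1)
        n = 6
        A = toeplitz(rng.standard_normal(n), rng.standard_normal(n))
        b = rng.standard_normal(n)

        # Stand-in for the fast Toeplitz QR: keep only R.
        R = np.linalg.qr(A, mode='r')
        # Solve the semi-normal equations R^T R x = A^T b by two
        # triangular solves.
        y = solve_triangular(R, A.T @ b, trans='T')   # R^T y = A^T b
        x = solve_triangular(R, y)                    # R x = y
        print(np.allclose(A @ x, b))                  # True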

    Smoothing and Matching of 3-D Space Curves

    We present a new approach to the problem of matching 3-D curves. The approach has a low algorithmic complexity in the number of models and can operate in the presence of noise and partial occlusions. Our method builds upon the seminal work of Kishon et al. (1990), where curves are first smoothed using B-splines, with matching based on hashing using curvature and torsion measures. However, we introduce two enhancements:
    -- We make use of nonuniform B-spline approximations, which permits us to better retain information at high-curvature locations. The spline approximations are controlled (i.e., regularized) by making use of normal vectors to the surface in 3-D on which the curves lie, and by an explicit minimization of a bending energy. These measures allow a more accurate estimation of position, curvature, torsion, and Frenet frames along the curve.
    -- The computational complexity of the recognition process is relatively independent of the number of models and is considerably decreased by explicit use of the Frenet frame for hypothesis generation.
    As opposed to previous approaches, the method better copes with partial occlusion. Moreover, following a statistical study of the curvature and torsion covariances, we optimize the hash table discretization and discover improved invariants for recognition, different from the torsion measure. Finally, knowledge of invariant uncertainties is used to compute an optimal global transformation using an extended Kalman filter. We present experimental results using synthetic data and also using characteristic curves extracted from 3-D medical images. An earlier version of this article was presented at the 2nd European Conference on Computer Vision in Italy.
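    A hedged Python sketch of the curvature and torsion signatures used for hashing; scipy's generic smoothing spline stands in for the paper's nonuniform, surface-regularized B-splines:

        import numpy as np
        from scipy.interpolate import splprep, splev

        # Noisy 3-D test curve (a helix).
        t = np.linspace(0, 4 * np.pi, 200)
        curve = np.vstack([np.cos(t), np.sin(t), 0.1 * t])
        curve += 0.01 * np.random.default_rng(2).standard_normal(curve.shape)

        # Smooth with a cubic B-spline, then evaluate derivatives.
        tck, u = splprep(curve, s=2e-2)
        d1 = np.array(splev(u, tck, der=1))   # r'
        d2 = np.array(splev(u, tck, der=2))   # r''
        d3 = np.array(splev(u, tck, der=3))   # r'''

        # kappa = |r' x r''| / |r'|^3,  tau = (r' x r'') . r''' / |r' x r''|^2
        cross = np.cross(d1.T, d2.T).T
        curvature = np.linalg.norm(cross, axis=0) / np.linalg.norm(d1, axis=0) ** 3
        torsion = np.einsum('ij,ij->j', cross, d3) / np.linalg.norm(cross, axis=0) ** 2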

    Efficient algorithms for robust generalized cross-validation spline smoothing

    Generalized cross-validation (GCV) is a widely used parameter selection criterion for spline smoothing, but it can give poor results if the sample size n is not sufficiently large. An effective way to overcome this is to use the more stable criterion called robust GCV (RGCV). The main computational effort in evaluating the GCV score is the trace of the smoothing matrix, tr A, while the RGCV score requires both tr A and tr A^2. Since 1985, there has been an efficient O(n) algorithm to compute tr A. This paper develops two pairs of new O(n) algorithms to compute tr A and tr A^2, which allow the RGCV score to be calculated efficiently. The algorithms involve the differentiation of certain matrix functionals using banded Cholesky decomposition.
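    A hedged Python sketch of the two scores for a linear smoother y_hat = A(lambda) y. Here the smoothing matrix is formed explicitly, an O(n^3) computation that the paper's O(n) algorithms avoid; the RGCV form with parameter gamma follows the robust-GCV literature and should be treated as an assumption:

        import numpy as np

        def gcv_rgcv(A, y, gamma=0.3):
            """GCV and RGCV scores from an explicit smoothing matrix A."""
            n = len(y)
            resid = y - A @ y
            trA = np.trace(A)
            trA2 = np.trace(A @ A)          # the paper obtains tr A^2 in O(n)
            gcv = n * (resid @ resid) / (n - trA) ** 2
            rgcv = (gamma + (1.0 - gamma) * trA2 / n) * gcv
            return gcv, rgcv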

    Performance of robust GCV and modified GCV for spline smoothing

    While it is a popular selection criterion for spline smoothing, generalized cross-validation (GCV) occasionally yields severely undersmoothed estimates. Two extensions of GCV called robust GCV (RGCV) and modified GCV have been proposed as more stable criteria. Each involves a parameter that must be chosen, but the only guidance has come from simulation results. We investigate the performance of the criteria analytically. In most studies, the mean square prediction error is the only loss function considered. Here, we use both the prediction error and a stronger Sobolev norm error, which provides a better measure of the quality of the estimate. A geometric approach is used to analyse the superior small-sample stability of RGCV compared to GCV. In addition, by deriving the asymptotic inefficiency for both the prediction error and the Sobolev error, we find intervals for the parameters of RGCV and modified GCV for which the criteria have optimal performance.
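    For reference, a LaTeX sketch of the two error measures compared above, using standard definitions (the paper's exact norms may differ):

        % Mean square prediction error of the spline estimate \hat f_\lambda:
        \[
          L_P(\lambda) \;=\; \frac{1}{n} \sum_{i=1}^{n}
            \bigl( \hat f_\lambda(x_i) - f(x_i) \bigr)^2 .
        \]
        % A stronger Sobolev-type error, which also penalizes errors in
        % derivatives up to order s:
        \[
          L_S(\lambda) \;=\; \sum_{j=0}^{s} \int
            \bigl( \hat f_\lambda^{(j)}(x) - f^{(j)}(x) \bigr)^2 \, dx .
        \]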

    Differentiation of matrix functionals using triangular factorization

    In various applications, it is necessary to differentiate a matrix functional w(A(x)), where A(x) is a matrix depending on a parameter vector x. Usually, the functional itself can be readily computed from a triangular factorization of A(x). This paper develops several methods that also use the triangular factorization to efficiently evaluate the first and second derivatives of the functional. Both the full and sparse matrix situations are considered. There are similarities between these methods and algorithmic differentiation. However, the methodology developed here is explicit, leading to new algorithms. It is shown how the methods apply to several applications where the functional is a log determinant, including spline smoothing, covariance selection, and restricted maximum likelihood.
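    A hedged Python sketch of the log-determinant case named above: w(A) = log det A is read off a Cholesky factorization, and its first derivative follows from the identity d/dx log det A(x) = tr(A^{-1} dA/dx). This only illustrates the identities; the paper's algorithms evaluate such traces more efficiently from the factorization itself.

        import numpy as np
        from scipy.linalg import cholesky, cho_solve

        def logdet_and_derivative(A, dA_dx):
            # A = L L^T, so log det A = 2 * sum_i log L_ii.
            L = cholesky(A, lower=True)
            logdet = 2.0 * np.sum(np.log(np.diag(L)))
            # tr(A^{-1} dA/dx) via a Cholesky solve; no explicit inverse.
            deriv = np.trace(cho_solve((L, True), dA_dx))
            return logdet, deriv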